897 research outputs found

    A Marxist Re-Imagining of Capitalism

    Get PDF
    This paper explores why Hegel and Smith were optimistic about the promise of capitalism. I identify the virtues and characteristics that capitalism was expected to provide in theory, focusing on its potential benefits for the average person. I then examine capitalism in practice and identify the ways in which Hegel and Smith were wrong. Drawing on Henry Hansmann's model of the firm and his account of ownership and cost, which fit well with the problems I hope to address, I offer potential methods for addressing the failures of the free market. These methods fall into two categories: changes in corporate structure or practices, and changes in governing institutions or legislation. I primarily use the examples of Amazon and truck drivers to consider how market failures in both industries could be resolved, which can hopefully provide guidance for reforming other parts of our capitalist system.

    Energy Efficiency of Software Transactional Memory in a Heterogeneous Architecture

    Get PDF
    Hardware vendors invest significant effort in creating low-power CPUs that keep battery life and durability at acceptable levels. To achieve this goal and provide a good performance-energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two types of cores: big cores oriented toward performance, and little cores, which are slower and aimed at reducing energy consumption. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, while also benefiting from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account both the power/performance requirements of the application and what the architecture offers. To understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analyzed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for most of the applications, except for one that requires the computing performance that the big cores offer.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
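    The optimistic synchronization that software TM offers, as opposed to locks, can be illustrated with a minimal sketch: threads read a shared value without locking, compute speculatively, and retry if another thread committed in between. This is a generic version-validation sketch, not the design of the TM library evaluated in the abstract; the class and function names are illustrative.

```python
import threading

class VersionedCell:
    """A single shared word protected by optimistic versioning."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # taken only at commit time

    def read(self):
        # Optimistic read: snapshot value and version without locking.
        return self.value, self.version

    def try_commit(self, seen_version, new_value):
        # Commit succeeds only if no other thread wrote in between.
        with self._lock:
            if self.version != seen_version:
                return False  # conflict detected: caller must retry
            self.value = new_value
            self.version += 1
            return True

def transactional_add(cell, delta):
    # The retry loop is the optimistic path of a software transaction.
    while True:
        value, version = cell.read()
        if cell.try_commit(version, value + delta):
            return
```

    Under contention the fast path stays lock-free on reads; only the short commit step serializes, which is the trade-off power-aware TM designs tune per core type.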

    Active learning in annotating micro-blogs dealing with e-reputation

    Full text link
    Elections unleash strong political views on Twitter, but what do people really think about politics? Opinion and trend mining on micro-blogs dealing with politics has recently attracted researchers in several fields, including Information Retrieval and Machine Learning (ML). Since the performance of ML and Natural Language Processing (NLP) approaches is limited by the amount and quality of available data, one promising alternative for some tasks is the automatic propagation of expert annotations. This paper develops a so-called active learning process for automatically annotating French-language tweets that deal with the image (i.e., representation, web reputation) of politicians. Our main focus is the methodology followed to build an original annotated dataset expressing opinions about two French politicians over time. We therefore review state-of-the-art NLP-based ML algorithms to automatically annotate tweets, using a manual initiation step as bootstrap. This paper focuses on key issues in active learning while building a large annotated dataset from noise, which is introduced by human annotators, the abundance of data, and the label distribution across data and entities. In turn, we show that Twitter characteristics such as the author's name or hashtags can serve as bearing points not only to improve automatic systems for Opinion Mining (OM) and Topic Classification, but also to reduce noise in human annotations. However, a later thorough analysis shows that reducing noise might induce the loss of crucial information.
    Comment: Journal of Interdisciplinary Methodologies and Issues in Science - Vol 3 - Contextualisation digitale - 201
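    The core active-learning loop described above (bootstrap from a few manual labels, then repeatedly query a human for the items the model is least sure about) can be sketched generically. This is a toy uncertainty-sampling sketch with one-dimensional features and a nearest-centroid scorer, not the paper's NLP pipeline; `oracle` stands in for the human annotator and all names are illustrative.

```python
def centroid(points):
    # Mean of a class's feature values.
    return sum(points) / len(points)

def predict_proba(x, c_pos, c_neg):
    # Confidence in the positive class from relative distance to centroids.
    d_pos, d_neg = abs(x - c_pos), abs(x - c_neg)
    return d_neg / (d_pos + d_neg)

def active_learning(pool, labeled, oracle, budget):
    """Repeatedly query the oracle for the most uncertain pool item."""
    for _ in range(budget):
        pos = [x for x, y in labeled if y == 1]
        neg = [x for x, y in labeled if y == 0]
        c_pos, c_neg = centroid(pos), centroid(neg)
        # Uncertainty sampling: pick the item whose score is closest to 0.5.
        x_star = min(pool, key=lambda x: abs(predict_proba(x, c_pos, c_neg) - 0.5))
        pool.remove(x_star)
        labeled.append((x_star, oracle(x_star)))  # human annotation step
    return labeled
```

    Each queried label shifts the centroids, so the next query targets the updated decision boundary; this is the mechanism that lets a small annotation budget cover a large noisy pool.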

    Transactional memory on heterogeneous architectures

    Get PDF
    Thesis defended on 9 March 2018.
    If we look at today's computational needs and try to predict tomorrow's, we can conclude that heterogeneous processing will be present in many devices and applications. The reason is logical: different algorithms and data of different natures fit better on some computing devices than on others. Take a cutting-edge technology such as intelligent vehicles. In this kind of application, heterogeneous computing is not an option but a requirement. These vehicles collect and analyze images, a task for which graphics processors (GPUs) are very efficient. Many of these vehicles use simple algorithms with strong real-time requirements that must be implemented directly in hardware using FPGAs. And, of course, multi-core processors play a fundamental role in these systems, both orchestrating the work of other coprocessors and executing tasks in which no other processor is more efficient. However, processors themselves are no longer homogeneous devices. The different cores of a processor can offer different characteristics in terms of performance and energy consumption that adapt to the computing needs of the application. Programming this set of devices is a complex task, especially their synchronization. Usually, this synchronization is based on atomic operations, kernel launch and termination, barriers, and signals. More complex structures can be built from these basic synchronization primitives. However, programming these mechanisms is tedious and error-prone. Transactional memory (TM) has been proposed as an advanced yet simple mechanism to guarantee mutual exclusion.

    Towards a Software Transactional Memory for heterogeneous CPU-GPU processors

    Get PDF
    Heterogeneous Accelerated Processing Units (APUs) integrate a multi-core CPU and a GPU on the same chip. Modern APUs provide the programmer with platform atomics, used for communication between the CPU cores and the GPU through simple atomic datatypes. However, ensuring consistency for complex data types is a task delegated to programmers, who have to implement a mutual exclusion mechanism. Transactional Memory (TM) is an optimistic approach to implementing mutual exclusion. With TM, shared data can be accessed by multiple computing threads speculatively, but changes become visible only if a transaction ends with no conflicts with others in its memory accesses. TM has been studied and implemented in software and hardware for both CPU and GPU platforms, but an integrated solution has not been provided for APU processors. In this paper we present APUTM, a software TM designed to work on heterogeneous APU processors. The design of APUTM focuses on minimizing access to shared metadata in order to reduce the communication overhead incurred by expensive platform atomics. The main objective of APUTM is to help us understand the trade-offs of implementing a software TM on a heterogeneous CPU-GPU platform and to identify the key aspects to be considered for each device. In our experiments, we compare the adaptability of APUTM when executing on one of the devices (CPU or GPU) or on both simultaneously. These experiments show that APUTM is able to outperform sequential execution of the applications.
    This work has been supported by projects TIN2013-42253-P and TIN2016-80920-R from the Spanish Government, P11-TIC8144 and P12-TIC1470 from Junta de Andalucía, and Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
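    The speculative-access model described here (writes stay private until a transaction validates its reads and commits) can be sketched as buffered read/write sets with commit-time validation. This is a minimal, single-threaded illustration of lazy-versioning TM in general, not APUTM's actual metadata layout; names are illustrative.

```python
class Transaction:
    """Lazy-versioning transaction: buffers writes, validates at commit."""
    def __init__(self, memory):
        self.memory = memory    # shared store: address -> value
        self.read_set = {}      # address -> value observed at first read
        self.write_set = {}     # address -> buffered new value

    def read(self, addr):
        if addr in self.write_set:          # read-your-own-writes
            return self.write_set[addr]
        value = self.memory[addr]
        self.read_set.setdefault(addr, value)
        return value

    def write(self, addr, value):
        self.write_set[addr] = value        # buffered, not yet visible

    def commit(self):
        # Validate: every location read must be unchanged in shared memory.
        for addr, seen in self.read_set.items():
            if self.memory[addr] != seen:
                return False                # conflict: abort, discard writes
        self.memory.update(self.write_set)  # publish the buffered writes
        return True
```

    Keeping the read/write sets thread-private is what lets a design like the one described touch shared metadata (and hence platform atomics) only at commit time.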

    Improvements in Hardware Transactional Memory for GPU Architectures

    Get PDF
    In the multi-core CPU world, transactional memory (TM) has emerged as an alternative to lock-based programming for thread synchronization. Recent research proposes the use of TM in GPU architectures, where a high number of computing threads, organized in SIMT fashion, requires an effective synchronization method. In contrast to CPUs, GPUs offer two memory spaces: global memory and local memory. The local memory space serves as a shared scratch-pad for a subset of the computing threads, and programmers use it to speed up their applications thanks to its low latency. Prior work by the authors proposed lightweight hardware TM (HTM) support based on local memory, modifying the SIMT execution model and adding a conflict detection mechanism. An efficient implementation of these features is key to providing an effective synchronization mechanism at the local memory level. After a brief description of the main features of our HTM design for GPU local memory, in this work we gather together a number of proposals designed to improve those mechanisms with a high impact on performance. First, the SIMT execution model is modified to increase the parallelism of the application when transactions must be serialized in order to make forward progress. Second, the conflict detection mechanism is optimized depending on application characteristics, such as the read/write sets, the probability of conflict between transactions, and the existence of read-only transactions. As these features can be present in hardware simultaneously, it is the task of the compiler and runtime to determine which are more important for a given application. This work includes a discussion of the analysis to be done in order to choose the best configuration.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech

    Valoraciones exóticas: El caso de opciones americanas con barreras estocásticas

    Get PDF
    We develop a novel pricing strategy that approximates the value of an American option with exotic features through a portfolio of European options with different maturities. Among our findings, we show that: (i) our model is numerically robust in pricing plain vanilla American options; (ii) the model matches observed bids and premiums of multidimensional options that integrate Ratchet, Asian, and Barrier characteristics; and (iii) our closed-form approximation allows for an analytical solution of the option's greeks, which characterize the sensitivity to various risk factors. Finally, we highlight that our estimation requires less than 1% of the computational time compared to other standard methods, such as Monte Carlo simulations.
    Objective: We propose a pricing method for American options with exotic components that overcomes some numerical problems encountered when implementing Monte Carlo simulations in the context of the volatility options auctioned by Banco de la República.
    Contribution: We build a pricing strategy for American options with exotic features that we call time-value weighting. In this methodology, we approximate the value of an American option through a portfolio of European options, whose time value is used to calibrate the portfolio weights. The options studied combine exotic elements of ratchet, Asian, barrier, and multidimensional options. We note that the computation time of our method is lower than that required by least-squares Monte Carlo, so our proposal can be useful for financial market agents who need a fast and accurate estimate of an option's premium.
    Results: First, the time-value weighting methodology is statistically robust in pricing the premium of American options with exotic components. Compared with least-squares Monte Carlo, the premiums estimated by our model are more precise and stable. Second, for the volatility options auctioned by Banco de la República, we find that the estimated price is comparable to the bids and premiums of the auctions held. Additionally, financial intermediaries taking the long position in these options are mainly exposed to exchange-rate volatility.
    Highlight: Our proposal can be useful for financial market agents who need a fast and accurate estimate of an option's premium.
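    The building block of the portfolio approach described above is a closed-form European option price; the abstract's idea is that a weighted sum of such prices over several maturities approximates the American value. The sketch below uses the standard Black-Scholes formula for the European legs; the weights and parameters are illustrative assumptions, not the paper's time-value calibration.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def portfolio_price(S, K, r, sigma, maturities, weights):
    # American-style value approximated as a weighted sum of European
    # calls with staggered maturities (weights here are placeholders for
    # the calibrated time-value weights).
    return sum(w * bs_call(S, K, r, sigma, T)
               for w, T in zip(weights, maturities))
```

    Because each leg is closed-form, the portfolio price and its greeks are analytic, which is the source of the large speed advantage over simulation-based methods that the abstract reports.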